Borderline gradient continuity of minima


Similar articles

Continuity of Minima: Local Results

This paper compares and generalizes Berge’s maximum theorem for noncompact image sets established in Feinberg, Kasyanov and Voorneveld [5] and the local maximum theorem established in Bonnans and Shapiro [3, Proposition 4.4].


Gradient Maximum Principle for Minima

We state a maximum principle for the gradient of the minima of integral functionals I(u) = ∫_Ω [f(∇u) + g(u)] dx on ū + W^{1,1}_0(Ω), assuming only that I is strictly convex. We do not require that f, g be smooth, nor that they satisfy growth conditions. As an application, we prove a Lipschitz regularity result for constrained minima.


Global Lipschitz continuity for minima of degenerate problems

We consider the problem of minimizing the Lagrangian ∫_Ω [F(∇u) + f u] dx among functions on Ω ⊂ ℝⁿ with given boundary datum φ. We prove Lipschitz regularity up to the boundary for solutions of this problem, provided Ω is convex and φ satisfies the bounded slope condition. The convex function F is required to satisfy a qualified form of uniform convexity only outside a ball, and no growth assumptions ...


Extremal region detection guided by maxima of gradient magnitude

A problem in computer vision applications is to detect regions of interest under different imaging conditions. The state-of-the-art maximally stable extremal regions (MSER) detector finds affine-covariant regions by applying all possible thresholds to the input image, through three main steps: 1) building a component tree of the extremal regions' evolution (enumeration), 2) obtaining region ...
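The enumeration step the abstract describes — sweeping thresholds and tracking the connected components of dark pixels — can be illustrated for a single threshold with a flood fill. This is a toy sketch, not the MSER implementation; the function name `extremal_regions` and the 4-connectivity choice are assumptions for illustration:

```python
from collections import deque

def extremal_regions(img, t):
    """Connected components of pixels with intensity <= t (4-connectivity).

    A toy version of ONE threshold level of the MSER enumeration; the real
    detector sweeps all thresholds and links the levels into a component tree.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or img[sy][sx] > t:
                continue
            # Breadth-first flood fill of one extremal region.
            comp, queue = [], deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and not seen[ny][nx] and img[ny][nx] <= t:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            regions.append(comp)
    return regions

img = [
    [0, 9, 9, 0],
    [9, 9, 9, 9],
    [0, 9, 9, 0],
]
# At t = 0 the four dark corner pixels are isolated -> four regions;
# at t = 9 every pixel qualifies -> one region covering the image.
```

Raising the threshold can only merge regions, never split them — this nesting is what makes the per-threshold components assemble into a tree.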

Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-isolated Minima

The convergence rate of stochastic gradient search is analyzed in this paper. Using arguments based on differential geometry and Lojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. As opposed to the existing results, the results presented in this paper allow the objective function to have multiple, non-isolated minima, impose no ...
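A minimal sketch of the setting this abstract studies: stochastic gradient descent on an objective whose minimizers form a non-isolated set (here the unit circle, for f(x, y) = (x² + y² − 1)²). The step-size schedule and noise level are illustrative assumptions, not taken from the paper:

```python
import math
import random

def grad(x, y):
    # Gradient of f(x, y) = (x^2 + y^2 - 1)^2, whose minimizers form the
    # entire unit circle -- a set of multiple, non-isolated minima.
    c = 4.0 * (x * x + y * y - 1.0)
    return c * x, c * y

def sgd(x, y, steps=20000, seed=0):
    # Stochastic gradient search: noisy gradient plus a decaying step size
    # (schedule 0.05 / t**0.6 is a hypothetical choice for this sketch).
    rng = random.Random(seed)
    for t in range(1, steps + 1):
        gx, gy = grad(x, y)
        lr = 0.05 / t ** 0.6
        x -= lr * (gx + rng.gauss(0.0, 0.1))
        y -= lr * (gy + rng.gauss(0.0, 0.1))
    return x, y

x, y = sgd(1.5, 0.0)
# The iterate settles near SOME point of the minimizing circle, so
# math.hypot(x, y) ends up close to 1; which point it reaches depends
# on the initialization and the noise realization.
```

Because the minimizers are not isolated, one cannot ask which single minimum the iterate converges to — only how fast its distance to the whole minimizing set shrinks, which is exactly where the Łojasiewicz-inequality arguments enter.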



Journal

Journal title: Journal of Fixed Point Theory and Applications

Year: 2014

ISSN: 1661-7738, 1661-7746

DOI: 10.1007/s11784-014-0188-x